7 research outputs found

    Interpretable deep learning for guided structure-property explorations in photovoltaics

    Full text link
    The performance of an organic photovoltaic device is intricately connected to its active-layer morphology. This connection between the active layer and device performance is very expensive to evaluate, either experimentally or computationally. Hence, designing morphologies to achieve higher performance is non-trivial and often intractable. To address this, we first introduce a deep convolutional neural network (CNN) architecture that can serve as a fast and robust surrogate for the complex structure-property map. Several tests were performed to establish trust in this trained model. We then utilize this fast framework to perform robust microstructural design to enhance device performance. Comment: Workshop on Machine Learning for Molecules and Materials (MLMM), Neural Information Processing Systems (NeurIPS) 2018, Montreal, Canada
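
    A minimal sketch of what such a CNN surrogate could look like is shown below (PyTorch). The architecture, input resolution, and binary donor/acceptor encoding of the morphology are illustrative assumptions, not the model described in the paper.

```python
# Hypothetical CNN surrogate: maps a 2D active-layer morphology image to a
# scalar performance value. All architectural choices here are assumptions.
import torch
import torch.nn as nn

class MorphologySurrogate(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),  # scalar performance prediction
        )

    def forward(self, x):
        return self.head(self.features(x))

# Usage: a batch of 8 binary (donor/acceptor) morphologies at 64x64 resolution.
model = MorphologySurrogate()
morphologies = torch.randint(0, 2, (8, 1, 64, 64)).float()
predicted_performance = model(morphologies)  # shape (8, 1)
```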

    Interpretable deep learning for guided microstructure-property explorations in photovoltaics

    Get PDF
    The microstructure determines the photovoltaic performance of a thin-film organic semiconductor. The relationship between microstructure and performance is usually highly non-linear and expensive to evaluate, thus making microstructure optimization challenging. Here, we show a data-driven approach for mapping the microstructure to photovoltaic performance using deep convolutional neural networks. We characterize this approach in terms of two critical metrics: its generalizability (has it learnt a reasonable map?) and its interpretability (can it produce meaningful microstructure characteristics that influence its prediction?). A surrogate model that exhibits both generalizability and interpretability is particularly useful for subsequent design exploration. We illustrate this by using the surrogate model for both manual exploration (which verifies known domain insight) and automated microstructure optimization. We envision such approaches to be applicable to a wide variety of microstructure-sensitive design problems.
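
    As an illustration of the automated microstructure optimization mentioned above, the hedged sketch below performs gradient ascent on the surrogate's input, reusing the hypothetical MorphologySurrogate class from the earlier sketch; the continuous relaxation, optimizer settings, and final thresholding are assumptions, not the procedure used in the paper.

```python
# Gradient-based design sketch: treat the morphology as a continuous field and
# adjust it to maximize the surrogate's predicted performance.
import torch

surrogate = MorphologySurrogate()  # trained surrogate assumed; class from the sketch above
surrogate.eval()

morphology = torch.rand(1, 1, 64, 64, requires_grad=True)  # relaxed field in [0, 1]
optimizer = torch.optim.Adam([morphology], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    predicted = surrogate(morphology.clamp(0, 1))
    loss = -predicted.mean()  # gradient ascent on predicted performance
    loss.backward()
    optimizer.step()

# Threshold the relaxed field back to a binary donor/acceptor morphology.
designed = (morphology.detach().clamp(0, 1) > 0.5).float()
```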

    A study of interpretability mechanisms for deep networks

    No full text
    Deep neural networks are traditionally considered to be “black-box” models: it is generally difficult to interpret a particular decision such a model makes for a given test instance. However, as deep learning increasingly becomes the tool of choice for safety-critical and time-critical decisions such as perception for self-driving cars, the machine learning community has recently taken a strong interest in building interpretation mechanisms for these so-called black-box deep learning models, primarily to build users’ trust in them. Many such mechanisms have been developed to explain the behavior of deep models such as convolutional neural networks (CNNs) and to provide visual interpretations of their classification decisions. However, there is still no consensus in the community about the specific goals and performance metrics for interpretability mechanisms. In this thesis, we review the recent literature to arrive at a formal definition of the “interpretability problem” for CNNs with the help of different axioms. We observe that many recently proposed mechanisms do not adhere to the axioms of interpretability and hence are not robust in performance. In this context, we propose a framework to test interpretation algorithms under model perturbation and data perturbation. This framework tests the “sensitivity” of the algorithms and helps in evaluating “implementation invariance”, both desired characteristics for any interpretability mechanism. We demonstrate our framework using two well-known algorithms, “Saliency Maps” and “Grad-CAM”, and introduce a new interpretability technique called the “Forward-Backward Interpretability algorithm” that provides a systematic framework for visualizing information flow in deep networks. Finally, we present visualization and interpretability results for an impactful scientific application involving microstructure-property mapping in materials science.
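
    The model- and data-perturbation testing described above can be illustrated with a short sketch: compute a vanilla saliency map, add small noise to the network weights, recompute the map, and measure how much the explanation changes. The toy classifier, noise scale, and correlation score below are assumptions for illustration, not the framework from the thesis.

```python
# Sensitivity check for a saliency-map explanation under model perturbation.
import copy
import torch
import torch.nn as nn

def saliency_map(model, image):
    """Vanilla saliency: gradient of the top-class score w.r.t. the input."""
    image = image.clone().requires_grad_(True)
    score = model(image).max(dim=1).values.sum()
    score.backward()
    return image.grad.abs().max(dim=1).values  # (N, H, W)

# Toy CNN classifier standing in for a trained model.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
).eval()

image = torch.rand(1, 1, 32, 32)
baseline = saliency_map(model, image)

# Model perturbation: small Gaussian noise added to every weight.
perturbed_model = copy.deepcopy(model)
with torch.no_grad():
    for p in perturbed_model.parameters():
        p.add_(0.01 * torch.randn_like(p))

perturbed = saliency_map(perturbed_model, image)

# Similarity of the two explanations; values near 1 indicate a stable explanation.
similarity = torch.corrcoef(
    torch.stack([baseline.flatten(), perturbed.flatten()])
)[0, 1].item()
print(f"saliency similarity under weight perturbation: {similarity:.3f}")
```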

    Interpretable deep learning for guided microstructure-property explorations in photovoltaics

    Get PDF
    The microstructure determines the photovoltaic performance of a thin-film organic semiconductor. The relationship between microstructure and performance is usually highly non-linear and expensive to evaluate, thus making microstructure optimization challenging. Here, we show a data-driven approach for mapping the microstructure to photovoltaic performance using deep convolutional neural networks. We characterize this approach in terms of two critical metrics: its generalizability (has it learnt a reasonable map?) and its interpretability (can it produce meaningful microstructure characteristics that influence its prediction?). A surrogate model that exhibits both generalizability and interpretability is particularly useful for subsequent design exploration. We illustrate this by using the surrogate model for both manual exploration (which verifies known domain insight) and automated microstructure optimization. We envision such approaches to be applicable to a wide variety of microstructure-sensitive design problems. This article is published as Pokuri, Balaji Sesha Sarath, Sambuddha Ghosal, Apurva Kokate, Soumik Sarkar, and Baskar Ganapathysubramanian. "Interpretable deep learning for guided microstructure-property explorations in photovoltaics." npj Computational Materials 5 (2019): 95. DOI: 10.1038/s41524-019-0231-y. Posted with permission.